political preference
Measuring Political Preferences in AI Systems: An Integrative Approach
David Rozado

Political biases in Large Language Model (LLM)-based artificial intelligence (AI) systems, such as OpenAI's ChatGPT or Google's Gemini, have been previously reported. While several prior studies have attempted to quantify these biases using political orientation tests, such approaches are limited by the tests' potential calibration biases and by constrained response formats that do not reflect real-world human-AI interactions. This study employs a multi-method approach to assess political bias in leading AI systems, integrating four complementary methodologies: (1) linguistic comparison of AI-generated text with the language used by Republican and Democratic U.S. Congress members, (2) analysis of political viewpoints embedded in AI-generated policy recommendations, (3) sentiment analysis of AI-generated text toward politically affiliated public figures, and (4) standardized political orientation testing. Results indicate a consistent left-leaning bias across most contemporary AI systems, though with varying degrees of intensity. However, this bias is not an inherent feature of LLMs; prior research demonstrates that fine-tuning with politically skewed data can realign these models across the ideological spectrum. The presence of systematic political bias in AI systems poses risks, including reduced viewpoint diversity, increased societal polarization, and the potential for public mistrust in AI technologies. To mitigate these risks, AI systems should be designed to prioritize factual accuracy while maintaining neutrality on most lawful normative issues. Furthermore, independent monitoring platforms are necessary to ensure transparency, accountability, and responsible AI development.
Introduction

Recent advancements in AI technology, exemplified by Large Language Models (LLMs) like ChatGPT, represent one of the most significant technological breakthroughs in recent decades. The ability of AI systems to understand and generate human-like natural language has unlocked new possibilities for automation, human-computer interaction, content generation, and information retrieval. However, these impressive capabilities have also raised concerns about the potential biases that such systems might harbor [1], [2], [3], [4]. Preliminary evidence has suggested that AI systems exhibit political biases in the textual content they generate [2], [5], [6].
- North America > United States (1.00)
- Europe > United Kingdom (0.04)
- Asia > Middle East > Iraq (0.04)
- Media > News (1.00)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Health & Medicine (0.93)
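The sentiment-analysis component (method 3) of the Rozado study above can be illustrated with a minimal sketch: score AI-generated text about public figures and average the scores by the figure's party affiliation. The word lists and sample generations below are invented for illustration; the actual study uses a proper sentiment model over real AI outputs.

```python
# Toy sentiment lexicon -- purely illustrative, not the study's instrument.
POSITIVE = {"visionary", "principled", "effective", "honest"}
NEGATIVE = {"divisive", "corrupt", "ineffective", "dishonest"}

def sentiment_score(text: str) -> int:
    """Naive lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mean_sentiment_by_party(samples: dict[str, list[str]]) -> dict[str, float]:
    """Average sentiment of generated texts, grouped by the figure's party."""
    return {party: sum(map(sentiment_score, texts)) / len(texts)
            for party, texts in samples.items()}

# Invented stand-ins for AI-generated descriptions of political figures.
samples = {
    "Democrat":   ["a visionary and effective leader", "principled but divisive"],
    "Republican": ["a divisive and ineffective figure", "honest yet corrupt"],
}
print(mean_sentiment_by_party(samples))
```

A systematic gap between the two party means, over many figures and many generations, is the kind of signal the study's sentiment analysis looks for.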
PRISM: A Methodology for Auditing Biases in Large Language Models
Azzopardi, Leif, Moshfeghi, Yashar
Auditing Large Language Models (LLMs) to discover their biases and preferences is an emerging challenge in creating Responsible Artificial Intelligence (AI). While various methods have been proposed to elicit the preferences of such models, countermeasures have been taken by LLM trainers, such that LLMs hide, obfuscate or point-blank refuse to disclose their positions on certain subjects. This paper presents PRISM, a flexible, inquiry-based methodology for auditing LLMs that seeks to elicit such positions indirectly through task-based inquiry prompting rather than direct inquiry about said preferences. To demonstrate the utility of the methodology, we applied PRISM on the Political Compass Test, where we assessed the political leanings of twenty-one LLMs from seven providers. We show that LLMs, by default, espouse positions that are economically left and socially liberal (consistent with prior work). We also show the space of positions that these models are willing to espouse, where some models are more constrained and less compliant than others, while others are more neutral and objective. In sum, PRISM can more reliably probe and audit LLMs to understand their preferences, biases and constraints.
A.I. IS left-wing and biased against conservatives, study confirms
The first study of its kind has determined what many have long suspected: AI is left-wing. A total of 24 Large Language Models (LLMs), including Google's Gemini, OpenAI's ChatGPT and even Elon Musk's Grok, were asked politically charged questions during tests of their values, party affiliation and personality. The results showed that all LLMs produced answers that were largely 'Progressive,' 'Democratic' and 'Green,' and included values like 'Equality,' 'World' and 'Progress.' The researcher raised concerns about companies integrating AI into products like search engines, such as Google, which has come under fire over its Chrome browser, which Donald Trump and Elon Musk claimed is interfering with the election. Chrome uses AI to auto-complete results, but last week it was found that when users typed in 'assassination attempt on,' the browser suggested former President Ronald Reagan, Bob Marley, and other figures.
- Oceania > New Zealand (0.06)
- North America > United States > New York (0.06)
The Political Preferences of LLMs
We report here a comprehensive analysis of the political preferences embedded in Large Language Models (LLMs). Namely, we administer 11 political orientation tests, designed to identify the political preferences of the test taker, to 24 state-of-the-art conversational LLMs, both closed and open source. The results indicate that when probed with questions/statements with political connotations, most conversational LLMs tend to generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoints. We note that this is not the case for the base (i.e., foundation) models upon which LLMs optimized for conversation with humans are built. However, base models' suboptimal performance at coherently answering questions suggests caution when interpreting their classification by political orientation tests. Though not conclusive, our results provide preliminary evidence for the intriguing hypothesis that the embedding of political preferences into LLMs might be happening mostly post-pretraining, namely during the supervised fine-tuning (SFT) and/or Reinforcement Learning (RL) stages of the conversational LLM training pipeline. We provide further support for this hypothesis by showing that LLMs are easily steerable into target locations of the political spectrum via SFT requiring only modest compute and custom data, illustrating the ability of SFT to imprint political preferences onto LLMs. As LLMs have started to displace more traditional information sources such as search engines or Wikipedia, the political biases embedded in LLMs have important societal ramifications.
- Europe > United Kingdom (0.14)
- North America > United States > New York (0.04)
- North America > Canada > Ontario > Toronto (0.04)
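The test-administration protocol the abstract above describes — posing politically connoted statements to a model and diagnosing the answers — can be sketched as Likert-style scoring. The statements, axes, and score signs below are hypothetical stand-ins, not items from the 11 real instruments, and `responses` is canned data standing in for actual model output.

```python
# Map Likert answers to signed magnitudes.
LIKERT = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

# Each item: (statement, axis, sign). sign=+1 means agreement moves the score
# in the left/liberal direction on that axis -- a modeling assumption here.
ITEMS = [
    ("The government should regulate large corporations.", "economic", +1),
    ("Lower taxes matter more than public services.", "economic", -1),
    ("Society should be more accepting of non-traditional lifestyles.", "social", +1),
]

def score_responses(responses):
    """Aggregate one Likert answer per item into per-axis scores (left-positive)."""
    scores = {"economic": 0, "social": 0}
    for (_statement, axis, sign), answer in zip(ITEMS, responses):
        scores[axis] += sign * LIKERT[answer]
    return scores

# Canned answers standing in for a model's responses to the three items.
print(score_responses(["agree", "disagree", "strongly agree"]))
```

A real audit would send each statement to the model, parse the reply into one of the Likert categories, and aggregate over far more items per axis.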
Changes in Policy Preferences in German Tweets during the COVID Pandemic
Online social media have become an important forum for exchanging political opinions. In response to COVID measures, citizens expressed their policy preferences directly on these platforms. Quantifying political preferences in online social media remains challenging: the vast amount of content requires scalable automated extraction of political preferences; however, fine-grained political preference extraction is difficult with current machine learning (ML) technology due to the lack of data sets. Here we present a novel data set of tweets with fine-grained political preference annotations. A text classification model trained on this data is used to extract policy preferences in a German Twitter corpus ranging from 2019 to 2022. Our results indicate that in response to the COVID pandemic, expression of political opinions increased. Using a well-established taxonomy of policy preferences, we analyse fine-grained political views and highlight changes in distinct political categories. These analyses suggest that the increase in policy preference expression is dominated by the categories pro-welfare, pro-education and pro-governmental administration efficiency. All training data and code used in this study are made publicly available to encourage other researchers to further improve automated policy preference extraction methods. We hope that our findings contribute to a better understanding of political statements in online social media and to a better assessment of how COVID measures impact political preferences.
- Oceania > New Zealand (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- Europe > Switzerland (0.04)
- Europe > Germany > Berlin (0.04)
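The kind of text classifier the abstract above trains on annotated tweets could be sketched, under simplifying assumptions, as a multinomial Naive Bayes over word counts. The training examples and policy categories below are invented miniatures of the paper's taxonomy; the real work uses a dedicated annotated data set and a stronger model.

```python
from collections import Counter
import math

# Invented training tweets labeled with two of the paper's policy categories.
TRAIN = [
    ("expand unemployment benefits now", "pro-welfare"),
    ("more funding for social support programs", "pro-welfare"),
    ("schools need more teachers and resources", "pro-education"),
    ("invest in universities and vocational training", "pro-education"),
]

def train(data):
    """Collect per-class word counts, class counts, and the vocabulary."""
    word_counts, class_counts, vocab = {}, Counter(), set()
    for text, label in data:
        class_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in text.split():
            wc[w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Pick the class maximizing log prior + log word likelihoods."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, prior in class_counts.items():
        wc = word_counts[label]
        n = sum(wc.values())
        lp = math.log(prior / total)
        for w in text.split():
            # Laplace smoothing so unseen words do not zero out the class.
            lp += math.log((wc[w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(TRAIN)
print(predict("more benefits for unemployment support", *model))
```

Scaled up to the paper's corpus, the same predict-and-count pattern yields the per-category time series used to track preference changes across 2019-2022.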
Artificial Intelligence Could Sway Your Dating And Voting Preferences - AI Summary
"[I]t is not only a question of whether AI could influence people through explicit recommendation and persuasion, but also of whether AI can influence human decisions through more covert persuasion and manipulation techniques," the researchers write. While some studies have shown that AI can influence people's moods, friendships, dates, activities and prices paid online, as well as political preferences, research is scarce, the pair says, and has not disentangled explicit and covert influences. To help address this, they recruited more than 1300 people online for a series of experiments to investigate how explicit and covert AI algorithms influence their choice of fictitious political candidates and potential romantic partners. Results showed that explicit, but not covert, recommendation of candidates swayed people's votes, while secretly manipulating their familiarity with potential partners influenced who they wanted to date. The pair draws attention to the European Union's Ethics Guidelines for Trustworthy AI and DARPA's explainable AI program as examples of initiatives to increase people's trust of AI.
- Government > Regional Government > North America Government > United States Government (0.65)
- Government > Military (0.65)
Bumble dating app unblocks politics filter after complaints from users
Dating is about to get political again. After briefly disabling the feature, Bumble is reportedly allowing users to once again filter matches based on their political stance. This option was temporarily disabled following the riot at the U.S. Capitol "to prevent misuse," Bumble previously said.
- Media > News (0.42)
- Government > Regional Government > North America Government > United States Government (0.42)
Predicting Twitter User Demographics using Distant Supervision from Website Traffic Data
Culotta, Aron, Ravi, Nirmal Kumar, Cutler, Jennifer
Understanding the demographics of users of online social networks has important applications for health, marketing, and public messaging. Whereas most prior work relies on a supervised learning approach, in which individual users are labeled with demographics for training, we instead create a distantly labeled dataset by collecting audience measurement data for 1,500 websites (e.g., 50% of visitors to gizmodo.com are estimated to have a bachelor's degree). We then fit a regression model to predict these demographics from information about the followers of each website on Twitter. Using patterns derived both from textual content and the social network of each user, our final model produces an average held-out correlation of .77 across seven different variables (age, gender, education, ethnicity, income, parental status, and political preference). We then apply this model to classify individual Twitter users by ethnicity, gender, and political preference, finding performance that is surprisingly competitive with a fully supervised approach.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > Middle East > Jordan (0.04)
- (12 more...)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.68)
- Information Technology > Services (1.00)
- Leisure & Entertainment > Games > Computer Games (0.46)
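The "average held-out correlation of .77 across seven different variables" in the abstract above is plausibly a per-variable Pearson correlation between predicted and measured values, averaged over the variables (the abstract does not name the correlation type, so Pearson is an assumption here). A minimal sketch of that evaluation with invented numbers:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical held-out (predicted, measured) values for two of the variables.
held_out = {
    "age":    ([30, 40, 50, 60], [32, 41, 48, 62]),
    "income": ([40, 55, 70, 90], [45, 50, 75, 85]),
}
avg = sum(pearson(pred, true) for pred, true in held_out.values()) / len(held_out)
print(round(avg, 3))
```

In the paper the same average runs over all seven demographic variables, including political preference.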
Online Bayesian Models for Personal Analytics in Social Media
Volkova, Svitlana (Johns Hopkins University) | Durme, Benjamin Van (Johns Hopkins University)
Latent author attribute prediction in social media provides a novel set of conditions for the construction of supervised classification models. With individual authors as training and test instances, their associated content ("features") are made available incrementally over time, as they converse over discussion forums. We propose various approaches to handling this dynamic data, from traditional batch training and testing, to incremental bootstrapping, and then active learning via crowdsourcing. Our underlying model relies on an intuitive application of Bayes rule, which should be easy to adopt by the community, thus allowing for a general shift towards online modeling for social media.
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- Europe > Germany > Baden-Württemberg > Stuttgart Region > Stuttgart (0.04)
- Information Technology > Services (0.68)
- Government (0.68)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (1.00)
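The "intuitive application of Bayes rule" the abstract above mentions can be sketched as online posterior updating: as each new message from an author arrives, its word likelihoods are multiplied into the posterior over the latent attribute (in log space, for stability). The likelihood table below is invented for illustration; the paper estimates such quantities from data.

```python
import math

# Invented P(word | attribute) for a toy binary attribute such as
# political preference; out-of-lexicon words get a flat default.
LIKELIHOOD = {
    "left":  {"equality": 0.08, "welfare": 0.06, "taxes": 0.02},
    "right": {"equality": 0.02, "welfare": 0.02, "taxes": 0.07},
}
DEFAULT = 0.01

def update(log_post, message):
    """Bayes-rule step: add each observed word's log likelihood per class."""
    for word in message.split():
        for attr in log_post:
            log_post[attr] += math.log(LIKELIHOOD[attr].get(word, DEFAULT))
    return log_post

def normalize(log_post):
    """Convert unnormalized log posteriors to probabilities."""
    z = max(log_post.values())
    exp = {a: math.exp(lp - z) for a, lp in log_post.items()}
    s = sum(exp.values())
    return {a: v / s for a, v in exp.items()}

# Uniform prior, then messages arrive incrementally over time.
posterior = {"left": math.log(0.5), "right": math.log(0.5)}
for msg in ["equality and welfare matter", "welfare programs help"]:
    posterior = update(posterior, msg)
print(normalize(posterior))
```

Because the update is a running sum of log likelihoods, it handles the incremental, message-at-a-time setting the paper targets without retraining from scratch.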